

Section: New Software and Platforms

COCO

COmparing Continuous Optimizers

Keywords: Benchmarking - Numerical optimization - Black-box optimization - Stochastic optimization

Scientific Description

COmparing Continuous Optimizers (COCO) [65] is a tool for benchmarking algorithms for black-box optimization. COCO facilitates systematic experimentation in the field of continuous optimization by providing: (1) an experimental framework for testing algorithms, (2) post-processing facilities for generating publication-quality figures and tables, and (3) LaTeX article templates that present these figures and tables in a single document.
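
As an illustration of point (1), the experimental loop in the Python interface can be as compact as the following sketch. It assumes the cocoex module shipped with recent COCO releases and uses SciPy's Nelder-Mead simplex as a stand-in for the optimizer under test; the result folder name is a placeholder.

    import cocoex                # COCO experimentation module
    import scipy.optimize        # stand-in optimizer to be benchmarked

    # Instantiate the single-objective "bbob" test suite and an observer
    # that logs all function evaluations to disk (folder name is arbitrary).
    suite = cocoex.Suite("bbob", "", "")
    observer = cocoex.Observer("bbob", "result_folder: my-first-experiment")

    for problem in suite:                  # loop over all test problems
        problem.observe_with(observer)     # attach logging to this problem
        scipy.optimize.fmin(problem, problem.initial_solution, disp=False)
        problem.free()                     # release the underlying C problem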

The COCO software is composed of two parts: (i) an interface, available in several programming languages (C/C++, Java, MATLAB/Octave, Python), that allows users to run and log experiments on a suite of test functions (several test suites are provided); (ii) a Python tool for generating figures and tables that can be browsed in HTML or used in the LaTeX templates.
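
For part (ii), the post-processing can be invoked directly from Python, as in the following sketch. It assumes the cocopp module name used in recent releases (earlier releases shipped the same functionality under the name bbob_pproc) and the placeholder data folder from the experiment above.

    import cocopp    # COCO post-processing module (formerly bbob_pproc)

    # Generate the standard figures and tables from the logged data;
    # the output folder includes an HTML page for browsing the results.
    cocopp.main("exdata/my-first-experiment")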

Functional Description

The COCO platform provides the functionality to automatically benchmark optimization algorithms for bounded or unbounded, (yet) unconstrained optimization problems in continuous domains. Benchmarking is a vital part of algorithm engineering and a necessary path towards recommending algorithms for practical applications. The COCO platform relieves algorithm developers and practitioners alike of (re-)writing test functions, logging, and plotting facilities by providing an easy-to-use interface in several programming languages. The platform has been under development since 2007 and has been used extensively within the “Blackbox Optimization Benchmarking (BBOB)” workshop series since 2009. Overall, more than 140 algorithms and algorithm variants, submitted by contributors from all over the world, have been benchmarked with the platform so far, and all data are publicly available to the research community. A new test suite of bi-objective problems [74] was used for the BBOB-2016 workshop at GECCO.
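
The bi-objective suite is selected the same way as the single-objective one; a minimal sketch, assuming the suite name "bbob-biobj" used by the BBOB-2016 setup:

    import cocoex

    # The bi-objective "bbob-biobj" suite is instantiated like the
    # single-objective one; each problem maps x to two objective values.
    suite = cocoex.Suite("bbob-biobj", "", "")
    problem = suite.get_problem(0)               # first bi-objective problem
    f1, f2 = problem(problem.initial_solution)   # both objectives at once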